174 research outputs found

    An exploration of semiotics of new auditory displays: A comparative analysis with visual displays

    Communicability is an important factor in user interfaces. To address communicability, extensive research has been done on visual displays, whereas relatively little has been done on auditory displays. The present paper analyzes the semiotics of novel auditory displays (spearcon, spindex, and lyricon) using Peirce’s classification of signs: icon, symbol, and index. After the aesthetic developmental patterns of the visual counterparts are presented, the semiotics of these auditory cues is discussed along with future design directions.

    Multimodal interaction in connected automated vehicles

    Electric vehicles and automated vehicles are becoming more pervasive in our everyday life. Ideally, fully automated vehicles that drivers can completely trust would be the best solution. However, due to technical limitations and human factors issues, fully automated vehicles are still under test, and no concrete evidence has yet shown that their functionality is superior to human cognition and operation. In the Mind Music Machine Lab, we are actively conducting research on connected and automated vehicles, mainly using driving simulators. This talk focuses on multimodal interactions between a driver and a vehicle, as well as between the driver and nearby drivers. In this autonomous driving context, we facilitate collaborative driving by estimating the driver’s cognitive and affective states using multiple sensors (e.g., computer vision, physiological devices) and by communicating via auditory and gestural channels. Future work includes refining our designs for diverse populations, including drivers with difficulties/disabilities, passengers, and pedestrians.
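    As a rough illustration of the pipeline this talk describes (multiple sensors feeding a driver-state estimate that selects a communication channel), here is a minimal, hypothetical sketch; the sensor inputs, weights, threshold, and channel names are assumptions for illustration, not the lab's actual implementation.

    ```python
    # Hypothetical fusion of two driver-state estimates; weights and the
    # threshold are illustrative assumptions, not the lab's actual design.

    def fuse_driver_state(vision_stress: float, physio_stress: float,
                          w_vision: float = 0.4, w_physio: float = 0.6) -> str:
        """Combine camera-based and physiological stress estimates (each in
        [0, 1]) into one score and pick a channel for the in-vehicle agent."""
        score = w_vision * vision_stress + w_physio * physio_stress
        # A clearly stressed driver gets a brief auditory cue; otherwise the
        # agent stays with a low-key ambient display.
        return "auditory_alert" if score > 0.6 else "ambient_display"

    print(fuse_driver_state(vision_stress=0.8, physio_stress=0.7))  # auditory_alert
    ```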

    "Spindex" (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking

    In a large number of electronic devices, users interact with the system by navigating through various menus. Auditory menus can complement or even replace visual menus, so research on auditory menus has recently increased for mobile devices as well as desktop computers. Despite the potential importance of auditory displays on touch screen devices, little research has attempted to enhance the effectiveness of auditory menus for those devices. In the present study, I investigated how advanced auditory cues enhance auditory menu navigation on a touch screen smartphone, especially for new input gestures such as tapping, wheeling, and flicking for navigating a one-dimensional menu. Moreover, I examined whether advanced auditory cues improve user experience, not only in visuals-off situations but also in visuals-on contexts. To this end, I used a novel auditory menu enhancement called a "spindex" (i.e., speech index), in which brief audio cues inform users of where they are in a long menu. In this study, each item in a menu was preceded by a sound based on the item's initial letter. One hundred twenty-two undergraduates navigated through an alphabetized list of 150 song titles. The study was a split-plot design with manipulated auditory cue type (text-to-speech (TTS) alone vs. TTS plus spindex), visual mode (on vs. off), and input gesture style (tapping, wheeling, and flicking). Target search time and subjective workload for TTS + spindex were lower than those for TTS alone in all input gesture types, regardless of visual mode. Also, on subjective rating scales, participants rated the TTS + spindex condition higher than plain TTS on being 'effective' and 'functionally helpful'. The interaction between input methods and output modes (i.e., auditory cue types), and its effects on navigation behaviors, was also analyzed based on the two-stage navigation strategy model used in auditory menus. Results are discussed in analogy with visual search theory and in terms of practical applications of spindex cues.
    M.S. Thesis. Committee Chair: Bruce N. Walker; Committee Members: Frank Durso, Gregory M. Cors
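    The spindex mechanism described above (a brief cue derived from each item's initial letter, played before or instead of the full item name) can be sketched in a few lines. This is a hypothetical illustration; the cue/TTS queue format and the fast-scroll rule are assumptions, not the thesis's actual code.

    ```python
    # A minimal sketch of spindex cue generation; the queue format and the
    # fast-scroll behavior are illustrative assumptions.

    def spindex_cue(item: str) -> str:
        """Return the speech-index cue for a menu item: its initial letter,
        e.g. 'A' for 'Abbey Road', spoken briefly before the item itself."""
        return item.strip()[0].upper()

    def audio_for(menu: list[str], index: int, fast_scroll: bool) -> list[str]:
        """Build the audio queue when landing on menu[index]. During fast
        scrolling only the short spindex cue plays; once the user slows
        down, full text-to-speech of the item follows."""
        queue = [f"cue:{spindex_cue(menu[index])}"]
        if not fast_scroll:
            queue.append(f"tts:{menu[index]}")
        return queue

    songs = sorted(["Yesterday", "Yellow Submarine", "Abbey Road", "Let It Be"])
    print(audio_for(songs, 0, fast_scroll=True))   # ['cue:A']
    print(audio_for(songs, 3, fast_scroll=False))  # ['cue:Y', 'tts:Yesterday']
    ```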

    Robotic arts: Current practices, potentials, and implications

    Given that the origin of the “robot” lies in efforts to create a worker to help people, there has been relatively little research on making robots for non-work purposes. However, some researchers have explored robotic arts since the time of Leonardo da Vinci. Many questions can be posed regarding the potential of robotic arts: (1) Is there anything we can call machine creativity? (2) Can robots improvise artworks on the fly? and (3) Can art robots pass the Turing test? To ponder these questions and assess the current status of robotic arts, the present paper surveys the contributions of robotics to diverse forms of art, including drawing, theater, music, and dance. The paper describes selected projects in each genre, along with their core procedures, possibilities, and limitations within the aesthetic computing framework. It then discusses the implications of these robotic arts for both robot research and art research, followed by conclusions, including answers to the questions posed at the outset.

    A survey on hardware and software solutions for multimodal wearable assistive devices targeting the visually impaired

    The market penetration of user-centric assistive devices has rapidly increased in the past decades. Growth in computational power, accessibility, and cognitive device capabilities has been accompanied by significant reductions in weight, size, and price, as a result of which mobile and wearable equipment is becoming part of our everyday life. In this context, a key focus of development has been on rehabilitation engineering and on developing assistive technologies targeting people with various disabilities, including hearing loss, visual impairments, and others. Applications range from simple health monitoring, such as sport activity trackers, through medical applications including sensory (e.g., hearing) aids and real-time monitoring of life functions, to task-oriented tools such as navigational devices for the blind. This paper provides an overview of recent trends in software- and hardware-based signal processing relevant to the development of wearable assistive solutions.

    Subjective assessment of in-vehicle auditory warnings for rail grade crossings

    Human factors research has played an important role in reducing the incidence of vehicle-train collisions at rail grade crossings over the past 30 years. With the growing popularity of in-vehicle infotainment systems and GPS devices, new opportunities arise to cost-efficiently and effectively alert drivers to railroad crossings and to promote safer driving habits. To best utilize this in-vehicle technology, 32 auditory warnings (16 verbal, 7 train-related auditory icons, and 9 generic earcons) were generated and presented to 31 participants after a brief low-fidelity driving simulation. Participants rated each sound on eight dimensions deemed important in the previous auditory warning literature. Preliminary results and possible interpretations are discussed.

    Regulating drivers’ aggressiveness by sonifying emotional data

    There have been efforts in the cognitive and behavioral sciences to mitigate drivers’ emotions in order to decrease the associated traffic accidents, injuries, fatalities, and property damage. In this study, we targeted aggressive drivers and tried to regulate their emotions by sonifying their emotional data. Results are discussed in relation to an affect regulation model and directions for future research.
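    As a sketch of what "sonifying emotional data" can mean in practice, the snippet below maps an estimated arousal/valence state to tempo and pitch using an affect-regulating counter-mapping (the more aroused the driver, the calmer the sound). The ranges and linear mappings are assumptions for illustration, not the study's actual design.

    ```python
    # Hypothetical affect-regulating sonification; parameter ranges and the
    # linear mappings are illustrative assumptions, not the study's design.

    def sonify(arousal: float, valence: float) -> dict:
        """Map arousal/valence in [0, 1] to simple audio parameters.
        Counter-mapping for regulation: higher arousal -> slower tempo,
        nudging an aggressive driver toward a calmer state."""
        tempo_bpm = 60 + 60 * (1.0 - arousal)  # aroused driver hears slow music
        pitch_hz = 220 + 220 * valence         # brighter sound for positive mood
        return {"tempo_bpm": round(tempo_bpm), "pitch_hz": round(pitch_hz)}

    print(sonify(arousal=0.9, valence=0.2))  # {'tempo_bpm': 66, 'pitch_hz': 264}
    ```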

    Human-car confluence: “Socially-inspired driving mechanisms”

    With self-driving vehicles announced for the 2020s, today’s challenges in Intelligent Transportation Systems (ITS) lie in problems of negotiation and decision making in (spontaneously formed) car collectives. Due to the close coupling and interconnectedness of the involved driver-vehicle entities, effects at the local level, induced by drivers’ cognitive capacities, behavioral patterns, and social context, directly cause changes at the macro scale. To illustrate, a driver’s fatigue or emotion can influence the local driver-vehicle feedback loop, which is directly translated into his or her driving style and, in turn, can affect the driving styles of all nearby drivers. These transient yet collective changes in driver state and driving style give rise to global traffic phenomena such as jams and collective aggressiveness. To allow the harmonious coexistence of autonomous and human-driven vehicles, this chapter investigates the effects of socially-inspired driving and discusses the potentially beneficial effects its application could have on collective traffic.
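    The local-to-macro coupling the chapter describes can be illustrated with a toy model in which each driver's aggressiveness drifts toward that of adjacent drivers. Everything here (the adjacency structure, the coupling constant, the update rule) is an illustrative assumption, not the chapter's model.

    ```python
    # Toy driving-style contagion: each driver's aggressiveness drifts toward
    # the mean of list-adjacent neighbors. All parameters are assumptions.

    def step(aggressiveness: list[float], coupling: float = 0.2) -> list[float]:
        """One synchronous update of local driver states."""
        n = len(aggressiveness)
        updated = []
        for i, a in enumerate(aggressiveness):
            neighbors = [aggressiveness[j] for j in (i - 1, i + 1) if 0 <= j < n]
            mean = sum(neighbors) / len(neighbors)
            updated.append(a + coupling * (mean - a))
        return updated

    states = [0.9, 0.2, 0.1, 0.1]  # one aggressive driver among calm ones
    for _ in range(3):
        states = step(states)
    print([round(s, 2) for s in states])  # aggressiveness spreads and dilutes
    ```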

    Robotic motion learning framework to promote social engagement

    Imitation is a powerful component of communication between people, and it has important implications for improving the quality of interaction in the field of human–robot interaction (HRI). This paper discusses a novel framework designed to improve human–robot interaction through robotic imitation of a participant’s gestures. In our experiment, a humanoid robotic agent socializes and plays games with a participant. For the experimental group, the robot additionally imitates one of the participant’s novel gestures during a play session. We hypothesize that the robot’s use of imitation will increase the participant’s openness towards engaging with the robot. Experimental results from a user study of 12 subjects show that, post-imitation, experimental subjects displayed a more positive emotional state, had higher instances of mood contagion towards the robot, and interpreted the robot to have a higher level of autonomy than their control group counterparts did. These results point to increased participant interest in engagement fueled by personalized imitation during interaction.
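    As a sketch of the imitation step (not the paper's actual framework or API): record one of the participant's novel gestures as a joint-angle trajectory and replay a mirrored copy on the robot. The Robot class and the mirroring rule below are hypothetical stand-ins.

    ```python
    # Hypothetical gesture imitation: Robot and the mirroring rule are
    # illustrative stand-ins, not the paper's framework.

    from typing import List, Tuple

    Gesture = List[Tuple[float, float, float]]  # (shoulder, elbow, wrist) angles

    class Robot:
        def play(self, gesture: Gesture) -> None:
            for frame in gesture:
                print(f"move joints to {frame}")  # stand-in for motor commands

    def imitate(robot: Robot, observed: Gesture) -> None:
        """Replay the participant's gesture, mirrored left-to-right so the
        robot, facing the participant, reproduces it naturally."""
        mirrored = [(-s, e, -w) for (s, e, w) in observed]
        robot.play(mirrored)

    wave = [(0.0, 0.5, 0.2), (0.1, 0.6, -0.2), (0.0, 0.5, 0.2)]
    imitate(Robot(), wave)
    ```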

    “Musical Exercise” for people with visual impairments: A preliminary study with the blindfolded

    Performing independent physical exercise is critical to maintaining one's good health, but it is especially hard for people with visual impairments. To address this problem, we have developed a Musical Exercise platform for people with visual impairments so that they can exercise with good form consistently. We designed six conditions, crossing visual mode (blindfolded vs. visuals-on) with audio feedback (none, continuous, or discrete). Eighteen sighted participants took part in the experiment, performing two exercises (squat and wall sit) under all six conditions. The results show that Musical Exercise is a usable exercise assistance system with no adverse effect on exercise completion time or perceived workload. The results also show that with a specific sound design (i.e., discrete), participants in the blindfolded condition can exercise as consistently as participants in the non-blindfolded condition. This implies that not all sounds work equally well, and thus care is required in refining auditory displays. The potential and limitations of Musical Exercise and future work are discussed in light of the results.
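    The continuous vs. discrete feedback distinction can be sketched as follows, assuming a hypothetical knee-angle stream for the squat; the target angle, tolerance, and pitch mapping are illustrative assumptions, not the platform's actual design.

    ```python
    # Hypothetical continuous vs. discrete audio feedback for a squat; the
    # target angle, tolerance, and pitch mapping are illustrative assumptions.

    TARGET_ANGLE = 90.0  # assumed target knee angle (degrees) at squat bottom

    def continuous_feedback(angle: float) -> float:
        """Continuous condition: pitch (Hz) varies with distance from the
        target, so the user hears the tone settle as form improves."""
        return 200.0 + 2.0 * abs(angle - TARGET_ANGLE)

    def discrete_feedback(angle: float, tolerance: float = 10.0) -> str | None:
        """Discrete condition: a single confirmation tone only when the
        target posture is reached."""
        return "ding" if abs(angle - TARGET_ANGLE) <= tolerance else None

    for angle in (150.0, 120.0, 92.0):
        print(angle, continuous_feedback(angle), discrete_feedback(angle))
    ```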